Published on 24 January 2024

Artificial intelligence and ethics: best practices for companies

Joint interview with Laetitia Orsini-Sharps, EVP of Consumer Business at Orange and former President of the Positive AI coalition, and David Giblas, Deputy CEO of Malakoff Humanis and current President of Positive AI.


What is the vision and purpose behind Positive AI?

While 84% of business leaders say that responsible AI should be a priority, less than 20% of companies believe that they have implemented a program with a good level of maturity.

Laetitia Orsini-Sharps, EVP of Consumer Business at Orange and former President of the Positive AI coalition

Laetitia Orsini-Sharps: The coalition grew out of a meeting between our Chief Data Officers in 2020, as part of the work carried out by the European Commission to define the foundations of AI ethics. We saw the need for discussion, along with operational tools to implement the Commission's recommendations.

Positive AI was set up by and for people working in AI to help them integrate it into consumer products and services. 

While 84% of business leaders say that responsible AI should be a priority, less than 20% of companies believe that they have implemented a program with a good level of maturity so there’s a lot of room for improvement.

David Giblas: We know not all companies have achieved the same level of understanding or progress regarding responsible AI, which is why Positive AI is both a community and a collaborative platform.

This allows us to share best practice and our experiences and understanding of AI and its uses (e.g. via webinars). It is also an effective tool for co-producing guidelines and tools (training, benchmarks, labels), to help organizations and provide solutions to implement responsible AI.

Finally, Positive AI contributes to the public debate on AI regulation and development by bringing us closer to other public and private stakeholders, both French and foreign, to pool our progress and create a common approach at the European level.

What are the practicalities involved?

Companies must ensure AI is not perceived as an inaccessible and incomprehensible black box.

David Giblas, Deputy CEO of Malakoff Humanis and current President of Positive AI

D.G: First of all, companies are concerned about their employees. It's essential to explain that using AI is not intended to replace or control their teams but, on the contrary, to help them automate tasks so they can focus on higher-value work.

Companies are also accountable to their customers and end users and must consider where it's not appropriate to use AI. In healthcare, for example, when it comes to work stoppages, control measures must be decided by a person and not by AI. Companies must also ensure that there is no bias in the technology used for recruitment or anti-fraud controls, for example.

Finally, companies must be very transparent in ensuring that any use of artificial intelligence is explained as well as possible, so it is not perceived as an inaccessible and incomprehensible black box. Wherever it is deployed, whether in marketing, customer relations, or HR, it is fundamental to remain attentive.

L.O-S: Let's take Orange as an example. How we use AI must resonate with our purpose as "a trusted partner that gives everyone the keys to a responsible digital world," and therefore with responsible technology. As David says, humans must maintain their essential role and control. When it comes to recommending certain TV channel packages, for example, we had to de-bias our systems to ensure sport was not gendered. This has had very interesting effects on our commercial performance by increasing our sales to new audiences. Another example: we don't use generative AI to create content, images, or videos of humans who do not exist.

What are the main challenges? 

L.O-S: You have to have the right measures in place, starting with introducing sound governance at all levels. For example, we have a Data and AI Ethics Council at Group level, which is an independent advisory board of 11 external professionals. Their role is to define an ethical framework in line with our purpose. We have also set up an ethics committee within Orange France that meets every three months to measure our maturity and decide on the next steps. Finally, all projects that use AI need to pass a Responsible AI milestone early in their development process to start off on the right foot.

There’s also the matter of training, and not only data experts. We have developed three types of training: a two-hour training dedicated to Top Management, called Ethical Leaders; a three-hour Fundamentals training for business managers; and a three-day training aimed at data experts.

Finally, it is also essential to mobilize the right resources to apply regulation, just as we did with the European Union’s General Data Protection Regulation to protect personal data.

D.G: There is no question that governance for AI usage must also be excellent. This starts with selecting use cases and choosing the data that can be used, with the right goals in mind. Introducing an incubation phase ensures the results of projects can be verified before launch. For example, at Malakoff Humanis, our committee meets twice a year to review the results of new projects, decide what will continue, and select new use cases to be tested.

How do you make sure ethical AI is a sustainable success?

L.O-S: The main purpose of Positive AI is to onboard the company, employees, and managers. To achieve this, we must first show great humility. An audit phase is a key step, as it makes it easier to understand what needs to be done.

D.G: Unlike traditional software, the models used can drift and generate biases over time. This requires constant attention and long-term effort, because nothing can ever be taken for granted.

What tools and techniques does Positive AI offer its members? 

Positive AI is not only dedicated to AI experts or the most cutting-edge companies. Our mission is to bring together the widest possible network and work with their employees from a whole variety of professions.

Laetitia Orsini-Sharps, EVP of Consumer Business at Orange and former President of the Positive AI coalition

D.G: First of all, Positive AI supports its community of members through webinars, meetings, and co-construction workshops. It's essential that Positive AI is not only dedicated to AI experts or the most cutting-edge companies. Our mission is to bring together the widest possible network and work with their employees from a whole variety of professions.

Then comes the reference framework, which lets companies assess where they stand. It's made up of a set of questions about AI governance, covering each of the seven themes selected by the European Commission. It's a dynamic tool because each question refers to a set of tools and serves as a guide to determine what needs to be implemented concretely.

All of this necessarily entails training, as we were discussing earlier, from leaders to employees and managers. The final step for a company is to obtain the label, which is why audits are carried out by an independent external body. These audits focus on governance and on the models used by the artificial intelligence.

Faced with upcoming regulations, what priority topics should be debated publicly?

It would be a shame if the European Union’s power were limited to its ability to regulate AI without having powerful European companies working in the field.

David Giblas, Deputy CEO of Malakoff Humanis and current President of Positive AI

L.O-S: I think it's important not to forget the value created by using AI. Secondly, regulation means auditing the models used by AI, so it's essential to consider both the framework and the skills that must be employed to support adoption.

D.G: Finally, whether we like it or not, AI has been embraced globally within the services offered by American, Chinese, and European companies. Europeans are already using them, and their data is accessible to the companies operating these services. It would be a shame if the European Union's power were limited to its ability to regulate AI without having powerful European companies working in the field.

Institut Montaigne has launched Objectif AI. How do you complement each other?

L.O-S: Objectif AI is aimed at a very wide audience to develop a kind of AI culture, helping people understand what AI is and how it works in an accessible and concrete way. Positive AI specifically targets the ethical issues surrounding artificial intelligence within the world of business.

D.G: Making progress in these two fields will help us address many challenges in this area. More broadly, Positive AI and its members are also keen to develop links with our sister associations in other countries of the European Union, as is already the case with our German counterparts.