Published on 24 January 2024

Responsible AI: what challenges should we prepare for?

As AI-based solutions are deployed across all sectors, demonstrating a wide range of new possibilities, what key use cases are developing in our personal and professional spheres? How can we ensure they are a force for good and mitigate their intrinsic risks? Steve Jarrett, Chief AI Officer at Orange Innovation, and Brian Naughton, Data Scientist at Google, share their thoughts.


How do you think AI benefits users? 

Steve Jarrett: At Orange, our AI ethos is that it should give users superpowers that make their lives easier, both professionally and personally.

As an operator, we will use AI to become more efficient: ensuring our network is extremely reliable and secure, and making our interactions with customers more meaningful.

The technology can help us make sense of the data we use in every aspect of daily life. I see only one condition: that it is used responsibly, not only from an environmental point of view but also in terms of ethics, data protection, and regulatory compliance.

Brian Naughton: Generative AI will have a massive impact across all industries. The technology is being adopted at an unprecedented rate, which brings greatly increased user expectations alongside the benefits. GenAI makes our daily tasks much more efficient: for example, 88% of developers say they have already seen productivity gains from using generative AI in their work, as well as an improvement in the quality of the code they produce. Some analysts compare what we are experiencing to the next phase of the industrial revolution, the phase in which humans interact with technology intuitively.

What use cases do you anticipate?

“In the professional sphere, the top three areas of use in 2024: interacting naturally with technology, software development buddies, and improving efficiency.”

Brian Naughton, Data Scientist at Google

B.N.: In the professional sphere, the top three areas of use in 2024 will likely be customer service chatbots that let humans interact naturally with technology, software development buddies that make everyone a better developer, and improved efficiency through general automation of information retrieval and recommendations.
In the personal sphere, there are some really interesting use cases out there already:

  • Uber Eats has launched a GenAI assistant that suggests restaurant deals and lets you easily reorder your menu favorites. Soon the assistant will also help you plan your meals, find sales on grocery items, and order the ingredients for a recipe.
  • Another use of GenAI, one I am really looking forward to, is helping us navigate the sea of content available on streaming platforms through a personalized suggestion service. The assistant learns what you like and, based on your current mood, the platform offers you a movie, so you can avoid spending ages endlessly scrolling to find something to watch!
  • A final, exciting use case is in the education sector: students will soon have their own private GenAI tutors that can help them learn at their pace, presenting information in an ultra-personalized way according to their needs.

S.J.: At Orange, we’re working on three main categories of use.

  • Firstly, smarter networks: using AI to optimize our investments by predicting how to deploy our networks in the most cost-effective way. AI also helps us detect anomalies in our networks before users notice them, improving perceived quality. Finally, we have developed “green” use cases, such as optimizing battery consumption.
  • Secondly, we can improve our customers’ experience, both in interactions with bots and with our contact center agents. AI can transcribe the discussion, help diagnose the problem, and suggest a solution in real time. It can also help us identify customers’ preferences so we can make relevant, personalized marketing recommendations.
  • Finally, and more broadly, AI can help us improve our operational efficiency. Like every company, Orange has an incredible amount of data at its disposal, which we could leverage better across a number of professions. Our ambition is to dispel the fog around information and break down silos to enable what we call a Data Democracy, so that everyone can benefit. Thanks to AI, our employees will be able to solve problems faster by asking questions in natural language.
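The "Data Democracy" idea of asking questions in natural language over company data can be illustrated with a deliberately naive sketch. This is not Orange's actual system: all document names and texts below are hypothetical, and a real deployment would use an LLM with a retrieval pipeline rather than simple keyword overlap.

```python
# Purely illustrative sketch: answer a natural-language question by
# returning the internal document that shares the most words with it.

def score(question: str, document: str) -> int:
    """Count how many distinct question words appear in the document."""
    q_words = set(question.lower().split())
    d_words = set(document.lower().split())
    return len(q_words & d_words)

def answer(question: str, documents: dict[str, str]) -> str:
    """Return the title of the best-matching document."""
    return max(documents, key=lambda title: score(question, documents[title]))

# Hypothetical internal documents
docs = {
    "network-maintenance": "planned maintenance windows for the fiber network",
    "battery-optimization": "reducing battery consumption at radio sites",
    "customer-churn": "monthly report on customer churn and retention",
}

print(answer("how do we reduce battery consumption", docs))
# → battery-optimization
```

A production system would replace the word-overlap score with semantic embeddings and have an LLM compose the final answer, but the shape of the problem, matching a free-form question to the right internal data, is the same.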

How do you choose the most relevant use cases to focus your efforts on? 

B.N.: Traditional AI use cases were more focused on problems with well-defined rules and patterns (trading, fraud detection, etc.). Generative AI, on the other hand, is better suited to problems that require creativity and the ability to generate new ideas. In general, traditional AI is more task oriented and GenAI more creative and exploratory. There are areas where the two approaches complement each other, for example demand forecasting and recommendation engines.

S.J.: The trick to creating really compelling products is to think about the things that can make a difference and change lives. At Orange, we consider how relevant these products are from the outset, both in terms of use and of commercial value for the company. We employ a Test & Learn approach: we try out a lot of ideas to see whether they are useful and meet a need. If the conditions are right, we invest resources to create prototypes before scaling these solutions up.

Many users are worried that AI will have negative effects. Are these concerns justified?  

In the short term, our priority is to ensure that we use AI responsibly, paying particular attention to the issue of bias and data protection.

Steve Jarrett, Chief AI Officer at Orange Innovation

S.J.: Whenever a new technology can significantly change people’s lives, our duty is to take a measured approach. In the short term, our priority is to ensure that we use AI responsibly, paying particular attention to bias and data protection. These are significant challenges to overcome. Another major challenge is that AI models generate so-called hallucinations: very convincing answers that are not always accurate, so we need to know when we can trust them.

That’s why we have set up a Data and AI Ethics Council, made up of a dozen experts, which advises us on how to approach the most complex situations. Beyond this governance process, we invest in training to ensure that all Orange employees who use AI tools are aware of the potential risks. We believe humans are still central to the technology and its benefits, and we need to make sure we stay in control.

B.N.: Google has published a set of AI principles on its website that details its commitment to developing safe and responsible AI. Explainability is a key factor: understanding why models generate certain types of responses can help improve them. We are also committed to not developing AI that could harm people.

Internally, Google’s development teams are responsible for assessing and mitigating the risks of the models they create, including fully understanding how users might use them. We also provide our clients with tools to monitor responses and filter out abusive text, for example. Our role is to put safeguards in place to mitigate the risks of AI.

What are the conditions necessary to ensure responsible AI? 

B.N.: It requires a concerted effort between researchers, developers, and policymakers. Investing in innovation and model development is one thing, but we also need research and tools to evaluate these models, in order to understand their limitations and prevent misuse. It’s all happening very fast, but I’m sure we’re going to get there.

S.J.: I’m pleased with the initiatives and reflections underway, especially from the European Union, to address this subject in a cooperative and open way. The fact that the industry is able to mobilize so quickly makes me very optimistic.

What does the ongoing partnership between Orange and Google aim to achieve in the field of AI? 

S.J.: Google has set up an artificial intelligence platform called Vertex AI. It is a unified environment in which we can work with models provided by Google as well as by other players such as Anthropic, along with open source models from Meta and Mistral. Google also provides training content and expert advice in many of our fields.

More specifically, we are working on two projects: 

  • On-premises software and hardware solutions for AI data and models that can’t be moved to a public cloud in France. 
  • Speech recognition. Orange operates in 26 countries with a multitude of languages that are not well understood by AI today. By collaborating with the Google Speech team, we hope to achieve real-time recognition of all the languages spoken within the Group. 

Brian, why did you want to work with Orange?

B.N.: Because Orange is one of our most active clients when it comes to AI, and it has taken an undeniable lead on generative AI with a particularly innovative positioning.

Steve, why did you choose Google?

S.J.: Because we think Vertex AI is the perfect environment to test out different types of models. Other key elements are our aligned corporate cultures, with a shared focus on innovation and collaborative relationships, and our common desire to improve people’s daily lives.

To explore new AI use cases, we work closely with key ecosystem partners, from start-ups to tech giants such as Google and Meta. The LLaMA open source GenAI model is currently being tested within the Group, with a focus on improving customer experience and increasing our operational efficiency.