Published on 04 February 2020

AI: can it be trusted?

Artificial intelligence (AI) technologies use machines (or computers) to mimic human cognitive capabilities: perceiving the environment, understanding natural language, learning and reasoning. The objective? To help us automate tedious tasks or make better decisions, in our personal and professional lives. AI has enabled the emergence of automatic translators, voice assistants, humanoid robots and even autonomous vehicles. But how does AI work, and what are the issues involved, especially the ethical ones? Should consumers trust AI?


How does AI work?

Just like humans, machines incorporating AI have to learn in order to become intelligent. This machine learning relies on different techniques. The most common is supervised learning, where machines learn from labelled examples or real-life experiences so as to better predict future outcomes and make better decisions. This learning is referred to as “deep learning” when it uses an artificial neural network structure, as the sketch below illustrates.
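
A minimal sketch of supervised learning with a small artificial neural network, written in Python with scikit-learn (the library and the example dataset are illustrative assumptions; the article names no specific tools):

```python
# Supervised learning in miniature: the model learns from labelled
# examples, then predicts labels for examples it has never seen.
# scikit-learn and the iris dataset are illustrative choices only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Labelled examples: flower measurements (features) and species (labels).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small artificial neural network: the structure that, at much
# larger scale, is what "deep learning" refers to.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)            # learn from the labelled examples
print(model.score(X_test, y_test))     # accuracy on unseen examples
```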


The ethical issues of AI

AI’s artificial neural networks, trained on huge databases hosted on servers, raise ethical questions. In France, a recent report by CNIL (the Commission Nationale de l’Informatique et des Libertés) states: “The questions raised by algorithms and artificial intelligence have to do with societal choices, and have implications for all citizens”.

Three main fears about possible AI abuses are commonly shared around the world: the risk of intrusion into privacy caused by a disproportionate collection of personal data (mass surveillance); the risk of extreme “normativity” (recommendation engines pushing individuals to conform to a particular bias); and the risk that AI will replace large parts of the workforce.

For CNIL: “The most authoritative voices are rising up to denounce such predictions, which are at best fantasies and at worst lies”.

These concerns are amplified by fake news circulating about AI, such as stories of cyberattacks or even robot soldiers powered by AI.

To guard against the risk of social “drift”, significant regulatory work on AI is under way at European level, and Orange is contributing to it alongside other industry players. The aim is to coordinate the approach of all member states to the ethical implications of AI.


But can’t you already trust AI?

AI is already bringing us countless benefits. It enables us to generate and sort data, and therefore to share knowledge. It can match people and skills more easily. It can anticipate certain events and alert us to them. It can develop recommendations and help us make decisions in complex situations. The list of applications continues to grow: medicine, manufacturing, robotics, voice assistants, chatbots, education, leisure, gaming, autonomous vehicles and more.


Orange has a long-standing interest in AI and works with more than 130 internationally recognised AI specialists. Several projects have already been developed across the business, in areas such as networks, customer relations (using conversational robots) and healthcare.

Not only do we want AI to be useful and used responsibly, but we’re also asking, along with other industry players, how it can promote digital equality. Start-ups are already using AI to innovate and enable social progress, including recent international start-ups accelerated by Orange Fab.


So, should you trust AI?

According to the November 2019 Ifop survey “Awareness and image of Artificial Intelligence”, 71% of French people had a positive impression of AI and more than half (58%) said they trusted it. However, Thierry Taboy, coordinator of the Impact AI Observatory on Health and Well-being at Work, also detects some concerns, which reveal a “huge need for acculturation on the different forms of AI and their concrete applications”.

This leads us to believe that AI can be trusted if it is properly supervised, as we already experience on a daily basis. But for it to continue to support people and enhance their well-being, we need to learn more about it, understand its strengths and harness the benefits it can bring us.


Did you know?
AI and OPAL ("Open Algorithms"): a huge non-profit innovation project

OPAL is a platform created by a group of partners including the MIT Media Lab, Imperial College London, Orange, the World Economic Forum and Data-Pop Alliance. OPAL’s objective is to establish governance and a set of algorithms that comply with ethical standards.

The OPAL platform aims to unlock the power of private data for public-good purposes. Private companies wishing to contribute AI technologies to the common good (AI4Good) can download these algorithms and run them within their own infrastructure (so data privacy is preserved), in order to securely compile and analyse all or part of the data they hold.

The results are then made available to public authorities to give them a better picture of human reality and for uses strictly dedicated to public well-being. For example, they can be used to inform decisions in the context of fighting poverty, inequality, disease, crime, urban congestion and also town planning.
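
In other words, the algorithms travel to the data rather than the data travelling to the authorities. Here is a minimal, hypothetical sketch of that pattern in Python; every name and interface below is an illustrative assumption, not OPAL’s actual API:

```python
# Hypothetical sketch of the "code-to-data" pattern OPAL describes:
# a vetted algorithm runs inside the data holder's infrastructure,
# and only aggregate results leave. Names are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class VettedAlgorithm:
    """An ethically reviewed algorithm downloaded from the platform."""
    name: str
    compute: Callable[[list], dict]  # raw records in, aggregates only out

def run_on_premises(algorithm: VettedAlgorithm, private_records: list) -> dict:
    # Raw records never leave the data holder; only the aggregate
    # result is shared with public authorities.
    return algorithm.compute(private_records)

# Illustrative use: count records per district, e.g. for town planning.
density = VettedAlgorithm(
    name="district_density",
    compute=lambda records: {
        d: sum(1 for r in records if r["district"] == d)
        for d in {r["district"] for r in records}
    },
)

records = [{"district": "A"}, {"district": "A"}, {"district": "B"}]
print(run_on_premises(density, records))  # {'A': 2, 'B': 1} is all that leaves
```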

The initiative began in 2017 with two pilot projects, including one in partnership with the Senegalese government and the operator Orange-Sonatel.

Find out more: OPAL