Published on 24 January 2024

Understanding AI starts with the correct definition

Interview with Fayçal Boujemaa, Technology Strategist working in the Data & AI team at Orange Innovation.


How can we define artificial intelligence?

Fayçal Boujemaa: The goal of AI is to use machines to reproduce human cognitive skills (seeing, listening, reasoning, speaking, learning, etc.) to support decisions and automation. To achieve this, a whole range of sciences and technologies are used, such as computer vision, speech synthesis and recognition, knowledge representation, machine learning, and more.

Recent events could suggest that this is a new topic; however, it first appeared in 1950, in a famous research article by Alan Turing, undoubtedly one of the most prodigious mathematicians of the 20th century. He provided the first insight into the question of whether “machines can think”.

It seems important to me to state that AI is not a technology that will replace humans. Part of us is elusive and comes from the unconscious, and none of that can be reproduced. How it shapes our behavior is still a mystery.

What about generative AI? 

F.B.: Generative AI (GenAI) targets content creation, whether text, images, video, music, or code, from a text request (a "prompt"). It is a subset of the AI field that relies on deep learning and reinforcement learning techniques, and the current wave of progress builds on Google research carried out in 2017. The milestone OpenAI achieved in November 2022 was to make GenAI accessible to as many people as possible, both businesses and consumers. Google and OpenAI remain the tech pioneers: Bard and Gemini for the former, ChatGPT for the latter.

In addition to these two major players, and without being exhaustive, we can mention Meta, Hugging Face, Anthropic, Microsoft, Midjourney, Stable Diffusion, Mistral, and others.

How does generative AI work? 

F.B.: Like any AI, GenAI goes through a first phase, “learning”, before entering a second phase, “usage”, where it is made available to users. Usually, it continues to learn during this second phase.

A top-level explanation is that GenAI first learns by scrutinizing web content produced by humans (billions and billions of web pages, including Wikipedia, e-books, and electronic newspapers). It then uses statistics and probabilities to determine how likely a word is to appear in a given context (the words that precede it). The GenAI tool can then answer a user’s question word after word, based on the principle of the “most likely next word” (according to the statistics and probabilities established during the learning phase). It should be noted that this principle applies to text or code generation; for image or video generation, the process is slightly different.
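To make this principle concrete, here is a minimal illustrative sketch in Python of the “most likely next word” idea, using a tiny hand-written probability table. The table, the words, and the function are invented for the example; a real GenAI tool learns these probabilities from billions of documents and works with far richer contexts.

# Minimal sketch of the "most likely next word" principle.
# The probability table is hand-written for illustration only;
# a real model learns it from billions of web pages.
next_word_probs = {
    ("the", "sky"): {"is": 0.80, "was": 0.15, "turned": 0.05},
    ("sky", "is"): {"blue": 0.70, "clear": 0.20, "falling": 0.10},
    ("is", "blue"): {"today": 0.60, "again": 0.40},
}

def generate(prompt_words, steps=3):
    words = list(prompt_words)
    for _ in range(steps):
        context = tuple(words[-2:])        # the words that precede
        candidates = next_word_probs.get(context)
        if not candidates:                 # unknown context: stop
            break
        # pick the most likely next word for this context
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate(["the", "sky"]))  # -> "the sky is blue today"

Each new word is chosen only from the statistics attached to the words that precede it, which is also why an early unlikely choice can send the rest of the answer off course.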

How does generative AI currently fall short?

GenAI responses may lack originality and/or accuracy. In other cases, we can sometimes see hallucinations.

Fayçal Boujemaa, Technology Strategist, Orange Innovation / Data & AI

F.B.: While there has been considerable progress, there is still more ground to cover. GenAI responses may lack originality and/or accuracy, and may therefore contain outdated and/or false information. This is because GenAI tools are not trained or updated on a daily basis: the computing capacity required is so large that complete updates or refreshes take several weeks or even months.

In other cases, we can sometimes see hallucinations: answers that seem consistent but are actually false. This phenomenon is due to a “derailment” of the “most likely next word” principle during the process described above. Sometimes the “next word” is wrong, and when the error cascades down the line, the answer ends up quite far from the facts or the truth.

What’s more, as mentioned above, generative AI tools learn and produce content by imitation, sourced from an imperfect open web. As a result, they can sometimes reproduce fake news, conspiracy theories, or sexist or racist biases. In the wrong hands, generated audio, image, or video content can unfortunately be turned into deepfakes.

Another thing to watch is energy consumption. In both the “learning” phase and the “usage” phase described above, AI/GenAI requires a great deal of computing, which in turn drives enormous energy consumption. These shortcomings are being addressed by several upcoming innovations, including Green AI and new processors optimized for AI/GenAI.

Finally, GenAI tools pose significant problems in terms of intellectual property, because they use web data to learn and to create content without necessarily having been authorized to do so by the rights holders.

What skills are needed? 

Best practice when using GenAI requires essential know-how to write or state questions (prompts) correctly to get to what’s important.

Fayçal Boujemaa, Technology Strategist, Orange Innovation / Data & AI

F.B.: There are two separate fields of expertise. The first is building AI/GenAI “engines”, which is mainly the domain of scientists (researchers and engineers) drawing on skills such as data science, data engineering, and business expertise. The second is using these AI/GenAI engines, which concerns anyone applying these technologies in finance, HR, content creation, law, sales, or (IT) programming, in addition to the scientists and engineers (who are also users).

Best practice when using GenAI requires essential know-how to write or state questions (prompts) correctly to get to what’s important. This is why a new profession emerged with the arrival of OpenAI's ChatGPT at the end of 2022: the prompt engineer.
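As an illustration of what this know-how looks like in practice, here is a short, purely hypothetical sketch in Python; the field names and wording are invented for the example and are not an official template or method. The idea is to make the role, context, task, constraints, and expected output format explicit before sending the text to a GenAI tool.

# Hypothetical example of a structured prompt; the fields and wording
# are invented for illustration, not a standard template.
prompt = "\n".join([
    "Role: you are an assistant for a telecom customer-support team.",
    "Context: a customer reports slow fibre speeds every evening.",
    "Task: propose three troubleshooting steps, ordered by likelihood.",
    "Constraints: plain language, no jargon, at most 120 words.",
    "Output format: a numbered list.",
])
print(prompt)  # this text would then be submitted to a GenAI tool

Spelling out the role, context, and constraints in this way is what helps the tool get to what’s important instead of producing a vague, generic answer.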

Can we predict what the next steps will be? 

F.B.: This is almost impossible given the speed at which AI and GenAI are advancing, with the development of increasingly powerful and accurate multimodal GenAI engines. We’re seeing new ground broken at a pace rarely seen in the history of technology. With these astonishing and unforeseen developments, we’re entering an uncertain world where forecasting and predictions are becoming difficult, and where we have to stay agile.

What we can try to describe, however, is how AI is likely to evolve. In the beginning, we talked about weak or narrow AI. GenAI takes us towards what could one day become general or strong AI, which could reach a level comparable to that of human intelligence. We’re not there yet, as AI may need to be combined with the capabilities offered by quantum computers. This is a new generation of computers, still in the lab, with performance ranging from a few hundred thousand times to a few million times that of today’s computers. Some people think it will emerge by 2030 or even sooner, but there’s no real evidence yet. Combining the power of quantum computers and AI will undoubtedly lead to a world that is hard to imagine, with AI technology that could go beyond our own (human) capabilities: the era of Super AI.

It’s essential we never lose sight of the need to ensure that AI is and remains ethical, responsible, and trustworthy, to benefit everyone. As we said at the very beginning, the fact that part of us remains elusive and mysterious is precious!