The prowess and limits of artificial intelligence, as seen by researchers

This Monday, May 2, at 9:10 pm, the first episode of “l’Hôtel du temps”, presented by Thierry Ardisson, will be broadcast. This biography 2.0 brings deceased personalities back to life through artificial intelligence. A new kind of program that raises questions for researchers at the University of Limoges.

On the night of May 2 to 3, 1987, Dalida died. Thirty-five years later, the legendary singer is about to reappear on television, in a new kind of program called “l’Hôtel du temps”.

Based on the deepfake, or “hyperfake”, technique – used until now mostly to doctor videos for humorous purposes – the program brings deceased personalities such as Dalida, Coluche or Jean Gabin back to life.

Behind this technology lies artificial intelligence, a field well known to the master’s degree in “Computer Science, Image Synthesis and Graphic Design” (ISICG) at the University of Limoges.

This master’s degree, the only one of its kind in France, trains students to create and transform images, primarily for video games and cinema. “The training is geared toward image synthesis and graphic design,” explains Djamchid Ghazanfarpour, who has been in charge of the degree for 20 years and has specialized in image synthesis since 1982.

“In particular, our students are trained in deep learning, which consists of building computational models that learn from data. This technology is the foundation of deepfakes,” he adds.

The term deepfake, or hyperfake, is in fact a combination of “deep learning” and “fake”. Based on artificial intelligence, this technology makes it possible to simulate real people.

Within this degree, Frédéric Claux, a researcher and lecturer in imaging, is responsible for teaching this technique. He explains how it works:

“It’s an automated program that can run practically in real time if you have very powerful hardware. First, it analyzes the video to be processed, isolates the face and discards everything around it. This step is very important, because we only want to modify the face.”

Frédéric Claux, researcher and lecturer in imaging at the University of Limoges

before continuing: “Then, in another video, you have a second face of interest: the target face. The program isolates that face in the same way. Once you have two videos containing only the faces, you pass them through what is called an autoencoder, a program that learns a model of the two faces and lets you go from the source face to the target face. For this to work, the two faces have to be quite similar.”
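To make the idea concrete, here is a minimal sketch of that shared-encoder, two-decoder autoencoder in Python with PyTorch. It only illustrates the general principle Frédéric Claux describes, not the actual software used by the show; every name, layer size and training detail below is an assumption chosen for readability.

```python
# Sketch of the shared-encoder / two-decoder autoencoder behind face swaps.
# Sizes, names and training details are illustrative, not a production tool.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Rebuilds a 64x64 face crop from the latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One encoder shared by both identities, one decoder per identity.
encoder = Encoder()
decoder_source = Decoder()   # trained to reconstruct faces from the source video
decoder_target = Decoder()   # trained to reconstruct faces from the target video

# The swap: encode an isolated source-face crop, decode it with the *target* decoder.
source_face = torch.rand(1, 3, 64, 64)          # stand-in for an isolated face crop
swapped = decoder_target(encoder(source_face))  # same pose/expression, target identity
```

During training (not shown here), each decoder only ever sees its own person’s face crops and learns to reconstruct them from the shared latent code; the swap simply pairs the shared encoder with the other person’s decoder.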

The program will be broadcast on France 3 on Monday, May 2 at 9:10 pm.

“We can go very far in terms of realism. There are few technological limitations; computing has no limits,” adds Djamchid Ghazanfarpour.

For these computer enthusiasts, the show is “a technical feat, I’ll be in front of my TV,” Frédéric Claux confides with a laugh.

Others, however, take a much less favorable view of the concept. This is the case of Nicole Begnier, Professor of Information and Communication Sciences at the University of Limoges.

In recent years, she has done extensive research on deepfake technology.

“In my opinion, the phenomenon of deepfakes, fake photos or fake videos raises the question of what a statement means. When someone speaks, sings or paints, they are expressing themselves, expressing something. That is no longer the case here: the person is made to say something.”

Nicole Begnier, Professor of Information and Communication Sciences at the University of Limoges

before adding: “Dalida is not asked whether she agrees. We’ll have fun, make her sing, make her say things, but behind that lies a question of respect for the one who is gone.”

Thierry Ardisson defends himself: “There are no image rights for deceased persons. But I contacted the heirs each time, out of respect, out of morality, and because they also help me with the script. To write the texts, I give topics (origins, childhood, studies, etc.) to documentalists, who collect all the sentences the person already said or wrote.”

The presenter, whose program is directed by Serge Khalfoun and produced by 3ème Œil Productions, said in an interview with France 3: “I am the first to use deepfakes in a positive, cultural way. It is a tool, not an ideology.”

Statements contradicted by Nicole Begnier: “That is not entirely true. Behind all this is the ideology of neoliberalism, designed to tame and frame human activities. Thierry Ardisson forgets that producing an image, producing a song or playing a role is at once to signify, to mean something, and to weave a human bond in society.”

“Instead of inviting singers and actors to step into someone else’s shoes and play them, we will rely on technological prowess that seems more real than real. But do we really need that to honor someone? It’s not a revolution, it’s another step toward a world detached from the ground, toward flows of words that have little to do with our reality,” she adds.

Behind this concept lies one last question: deception. Is it possible to detect videos created with this technology?

“Of course,” explains Frédéric Claux. “You use the same principle: you take a fake video, you isolate the face, and you tell the program ‘this is fake’; then you do the same with a video that is not fake. You repeat the exercise thousands of times, and the program eventually becomes able to spot the fake videos.”
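Read as code, what Frédéric Claux describes is ordinary supervised learning: a binary classifier fed thousands of face crops labelled “fake” or “real”. The sketch below, again in Python with PyTorch, is only an illustration of that principle; the model, the placeholder data and all names are assumptions, not any specific detection tool.

```python
# Sketch of deepfake detection as binary classification on isolated face crops.
# Model size, data loading and names are illustrative only.
import torch
import torch.nn as nn

detector = nn.Sequential(                      # small CNN: face crop -> "fake" score
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 1),                          # one logit: >0 leans fake, <0 leans real
)
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

def training_step(face_crops, labels):
    """face_crops: batch of isolated faces (N, 3, 64, 64);
    labels: 1.0 for faces taken from fake videos, 0.0 for genuine ones."""
    optimizer.zero_grad()
    logits = detector(face_crops).squeeze(1)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# "You repeat the exercise thousands of times": loop over many labelled batches.
fake_batch = torch.rand(8, 3, 64, 64)   # stand-ins for real datasets of face crops
real_batch = torch.rand(8, 3, 64, 64)
for _ in range(3):  # in practice, thousands of iterations over a large dataset
    training_step(fake_batch, torch.ones(8))
    training_step(real_batch, torch.zeros(8))

# At inference, sigmoid(detector(face)) gives the estimated probability of "fake".
```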

Criticized by some and praised by others, deepfakes leave no one indifferent. Tonight, more than ever, Dalida’s words, words have not finished haunting us.
