Le monde selon l’IA (en)
Category: Exercises
Status: Prepared
Exhibition at the Jeu de Paume
Questions and answers (English)
Artificial intelligences (AIs) are not super-brains. They are made up of mathematical algorithms (a bit like recipes for calculation) which, once trained, can perform tasks on their own. The various AI models are used in many fields to analyze information, write or translate texts, generate images... By following the 15 questions of this path, explore the exhibition at your own pace and discover the artists who experiment with AI and question the world around us.
46 What is the weight of an AI?
We talk about the "cloud," but computers don’t float in the air! To store their content, AIs need heavy computer servers that use rare metals and consume a lot of energy. These machines, which carry out billions of operations, need a huge amount of water to cool down! To bear witness to this, artist Julian Charrière collected hard drives, smartphones, and other digital waste, and covered them in molten lava to create these stone blocks.
The weight of an AI is not only virtual or symbolic. Behind the immaterial image of the "cloud" lies a very concrete and physical reality. Artificial intelligences rely on heavy computer servers made of rare metals that consume large amounts of electricity and require a lot of water for cooling.
This invisible infrastructure mobilizes considerable natural resources: lithium, rare earths, water, oil, etc. Artist Julian Charrière, in his works Metamorphism and Buried Sunshines Burn, illustrates the materiality of AI by transforming digital waste (hard drives, motherboards, smartphones...) into molten mineral blocks, thus revealing the ecological and geological weight of our technologies.
Thus, the "weight" of an AI can be measured in extracted resources, energy consumed, and real environmental footprint, far from the illusion of a dematerialized digital universe.
49
Julian Charrière
Born in 1987 in Morges (Switzerland)
From left to right:
Buried Sunshines Burn | 5FO.P3K, 2023
Buried Sunshines Burn | 489.SLQ, 2024
Buried Sunshines Burn | 5G1.02H, 2024
Buried Sunshines Burn | 5D1.K91, 2024
Heliographs on highly polished stainless steel plates
67 Can an AI recognize me?
You’ve probably seen cameras in the streets, in stores, or other buildings. Some use AI to detect our faces, try to identify us, spot our expressions or even predict our behavior. How can we protect our privacy and freedom? There are laws in Europe to protect our data: the GDPR (General Data Protection Regulation). Approach this work by Trevor Paglen and observe which words the machine associates you with!
Yes, an AI can recognize you, but not like a human would. Some cameras, in streets or public places, are connected to artificial intelligence systems capable of analyzing faces. These AIs can detect features, guess emotions, or even predict behaviors.
"Psychopath" is one example of the labels the machine can attach to people based on how it interprets their behavior.
In his work Faces of ImageNet, artist Trevor Paglen shows how an AI can capture your image and automatically associate you with keywords, based on huge databases like ImageNet, used to train artificial vision systems. But these associations are not neutral: they can be biased, mistaken, or unfair.
That’s why laws like the GDPR in Europe exist to protect your privacy and your personal data. They help limit surveillance and regulate the use of these technologies.
Being recognized by an AI also raises the question of what it sees… and what it thinks it knows about us.
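To give a rough idea of the kind of automated face analysis such cameras perform, here is a minimal, illustrative sketch using OpenCV's bundled face detector. The input file name is a hypothetical placeholder; Paglen's installation relies on far more elaborate recognition and labeling models trained on datasets such as ImageNet.

```python
# Minimal face-detection sketch (OpenCV Haar cascade).
# Illustrative only: this is not the system used in Faces of ImageNet.
import cv2

# Load the frontal-face detector shipped with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("visitor.jpg")  # hypothetical input photo
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detection is a bounding box (x, y, width, height).
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

print(f"{len(faces)} face(s) detected")
cv2.imwrite("visitor_annotated.jpg", image)
```

Detection of this kind only locates faces; the step that associates a face with keywords is a separate classification model, which is where biased training data comes into play.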
68
Trevor Paglen
Born in 1974 in Camp Springs (Maryland, USA)
Faces of ImageNet, 2022
Interactive video installation
Courtesy of the artist, Altman Siegel, San Francisco, and the Pace Gallery
In this interactive installation, a hidden camera films the viewer standing in front of the screen, where their face appears, becoming an object of recognition and labeling. The keywords used for recognition are linked to image clusters from the controversial ImageNet dataset, which played a major role in the rise of artificial vision systems during the first half of the 2010s.
https://www.youtube.com/watch?v=K_lN4JD1sBY
Excerpt with the machine: https://youtu.be/K_lN4JD1sBY?si=1iU8q8L7KXGV2gsA&t=64
https://paglen.studio/2020/04/29/imagenet-roulette/ : people are seen by category
84 Do AIs work on their own?
Millions of people work to detect and sort online content, label objects in images, or remove violent content. This repetitive work is very poorly paid. Artist Agnieszka Kurant created this portrait using 10,000 selfies sent by these “ghost workers” who are never seen. She shares with them the proceeds from the sale of her work. The artist collective Meta Office shows us photos of their workplaces in China, India, Africa...
No, AIs do not work on their own. Behind the impressive performances of artificial intelligences, there are millions of people, often invisible, doing essential but poorly valued work. These “click workers” sort, label, and filter images, validate or censor content: they train AIs, correct them, and supervise their results.
Artist Agnieszka Kurant wanted to make these people visible by creating the work Aggregated Ghost, a collective portrait generated from 10,000 selfies sent by anonymous workers. She even shares the profits from her work with them.
The Meta Office collective also documents the precarious conditions in which these microworkers operate, particularly in China, India, and Africa.
In other words, if AI seems magical, it’s because it hides an army of human workers, often underpaid and forgotten. Without them, AI could not see, understand, or decide.
85
Meta Office
Studio created in 2023 in Rotterdam (Netherlands), composed of Lauritz Bohne (born in 1994 in Düsseldorf, Germany), Lea Scherer (born in 1996 in Vienna, Austria) and Edward Zammit (born in 1997 in Pietà, Malta)
Meta Office: Behind the Screens of Amazon Mechanical Turks
2021–2025
Multimedia installation, data visualization
The installation was born from a dialogue between the Meta Office collective members and workers on the Amazon Mechanical Turk microtask platform. For this project, the “click workers,” usually employed to index training dataset images or moderate AI-generated content, provided descriptions of their working conditions. Their responses and the accompanying images offer a glimpse into their geographic diversity, disparities in pay, and working conditions, which often rely on improvisation and bear the mark of exploitation by multinational companies.
The individuals occupying these workspaces are absent, while their identities are hidden behind generic usernames.
90 Can AIs write a love song if they don’t feel emotions?
Yes, at a human’s request, they can create new things: write a poem, generate an image, or compose music... To create something new, they identify patterns, imitate them, and mix them. That’s why it’s important to choose carefully what we teach them and know how to use them wisely. Artists enjoy testing AIs, modifying them, and even creating other systems to design their works.
Yes, an AI can write a love song, even if it feels nothing. In fact, AIs have no emotions, nor consciousness: they simply analyze thousands of texts, music, or images, then identify patterns, imitate them, combine them, and create new ones.
It is thanks to these abilities that they can write a poem, generate an image, or compose a melody based on a human request. But their creativity is not magical: it depends on the data they are given and the rules they are taught. That’s why it’s important to carefully choose what we show them.
Artists, precisely, explore these limits. They use AI as a tool for creation, but also as a means to raise questions: what is a song worth without emotions? Can poetry be entrusted to a machine? And what remains of the human in AI-generated art?
92 What did the Sphinx look like when it still had its nose?
AIs can help archaeologists repair certain objects damaged over time. Artist Egor Kraft uses AI to complete missing parts of ancient sculptures or imagine new ones. He trained this AI with thousands of images of statues. The gaps are then filled using a 3D printer. In 2018, when this work was created, machines could sometimes make mistakes by creating impossible shapes, called “AI hallucinations.”
We don’t know exactly, but artificial intelligences can help us imagine it. By analyzing thousands of images of ancient sculptures, some AIs are able to reconstruct missing parts, like the Sphinx’s nose or a broken arm on a statue.
Artist Egor Kraft uses AI to complete fragmentary works. He trained it with a massive image dataset, then uses 3D printing to give physical shape to the parts the machine invented. Sometimes, these reconstructions are precise, but other times, they produce strange or impossible shapes, known as AI hallucinations.
Thus, AIs don’t necessarily retrieve historical truth, but they open imaginative and poetic paths to explore the past differently.
93
Egor Kraft
Born in 1986 in Leningrad, now Saint Petersburg (Russia)
CAS_05 Julia Mamaea, 2019
Crema Marfil marble, polyamide, machine learning algorithms, specific natural data training corpus
CAS_12.1 Caryatid Portrait
Crema Marfil marble, polyamide, machine learning algorithms
CAS_V08 Deep Portrait: 12-channel version, 2019
12 screens, metal structure, machine learning algorithms, specific training dataset
CAS_15.3 Deep Frieze, 2021
Breccia marble, concrete, machine learning algorithms, synthetic training data
CAS_V11 Unstable Portrait
Specific server, steel structure, machine learning algorithms, specific training data, video, color, silent, 7 min 47 s, monitor
Series Content Aware Studies, 2018–ongoing
With support from the Federal Ministry of Arts, Culture, Civil Service and Sport, Austria
With his Content Aware Studies, Egor Kraft explores generative AI models that can be used either to complete fragmentary historical objects or to imagine historical objects that could have existed but never did. For Content Aware Studies I, Kraft trained a model with thousands of 3D scans of sculptures and friezes from the Hellenistic and Roman periods, then used it to produce 3D printed elements to replace missing fragments of existing sculptures or to create potential new ones. With this project, Kraft pursues his exploration of a kind of “reverse archaeology” and “synthetic history.”
96 Do AIs know everything?
AIs do not recognize certain parts of the world, or even entire peoples, because they have been trained on incomplete images or information. Artist Nouf Aljowaysir highlights the fact that AIs ignore certain cultures. To emphasize this, she erases the faces of people not identified by AIs in old photographs from the Middle East. She then creates new ghostly silhouettes.
No, AIs do not know everything. They may even ignore entire peoples, because they only learn from the images and information we provide them. If this data is incomplete, biased, or too focused on certain regions of the world, then AIs do not recognize what lies outside that scope.
Artist Nouf Aljowaysir shows this clearly in her work: by working on old photographs from the Middle East, she observed that AIs do not recognize certain faces. She voluntarily erases them to represent this absence, this invisibility. She then creates ghostly silhouettes, as an echo of this forgotten memory.
This reminds us that AI is not neutral: it reflects human choices and the biases of our societies.
97
Nouf Aljowaysir
Born in 1993 in Riyadh (Saudi Arabia)
Salaf, 2021–2025
Installation composed of three parts:
Dataset, 2021, generated with U-2 Net
Machine Vision Research in Archival Images, 2021
Generative, 2025, generated with StyleGAN3
Archive images: Getty Museum, The Getty Research Institute, Ken and Jenny Jacobson Orientalist Photography Collection
The project Salaf (Ancestors) questions the exoticizing and colonial Western perception of the Middle East, from 19th-century photographic collections to current artificial vision systems. Nouf Aljowaysir edits old photographs from the Middle East, erasing the silhouettes to emphasize the corpus’ failure to properly represent the populations concerned. From these reworked images, she then trains GANs to produce new images with a haunting spectrality.
98 How can you make a film using only words?
In the 1920s and 1930s, a poet imagined films, even wrote down his ideas, but never shot anything. Artist Érik Bullot uses AI to turn these texts into images and produce photographs of these non-existent films. Based on the sentences entered into the computer, the AI generates images that sometimes appear very realistic, sometimes strange. Is it a memory of the past or a dream of the future?
With AI, we can now transform words into images. This is what artist Érik Bullot does: he uses texts written in the 1920s–1930s by a poet who dreamed of cinema but never filmed his ideas. Thanks to an artificial intelligence model, Bullot inputs these sentences into a computer that generates images.
Some images appear realistic, others are strange, almost dreamlike. These are imaginary films, made from words, never shot but brought to life by AI. This raises questions about our relationship with time: is it a reconstruction of the past, or a projection of the future?
Thus, AI becomes a poetic tool that gives form to works that remained only dreams.
98
Érik Bullot
Born in 1963 in Soissons (France)
Cinéma vivant, 2024
Digital prints of 12 images generated by the Lexica diffusion model
Filmmaker, writer, and researcher, Érik Bullot explores the history of unfinished and fragmented films as well as various manifestations of an “imaginary cinema,” mental and potential, at the intersection of literature and parapsychology, technical experiments, and mediumistic practices. Generative AI, with its specific temporality — that of a “past conditional, close to a ‘would-have-been’” — brings these cinematic promises to life. In this image series, using the Stable Diffusion text-to-image model, Bullot tries to give form, from textual prompts, to the utopian visions of symbolist poet Saint-Pol-Roux (1861–1940), as formulated in the notes of his Cinéma vivant collection during the 1920s and 1930s.
102 Can an image be fully translated into words?
Estampa
Ekphrasis, 2025
4K video installation, color, sound, created with a selection of artificial vision programs
With this installation, the Estampa collective continues its exploration of new algorithmic relationships between words and images established by analytical and generative AI models. In Ekphrasis, excerpts from various films are submitted to a highly complex and layered algorithmic description process: objects, faces, and emotions are detected and categorized by artificial vision and facial analysis systems, while image-to-text models are used to generate detailed descriptions and analyses. The work draws our attention to the poetic potential inherent in the radical impossibility of fully translating images into words, and vice versa.
Not completely. That’s what the work Ekphrasis by the Estampa collective shows. This video installation uses artificial intelligence programs to analyze film excerpts: it detects objects, faces, emotions, then describes them in words using image-to-text models.
But even with advanced technologies, translating an image into words remains imperfect. AIs can analyze details, but the deeper meaning, feelings, or visual ambiguity often escape description. The work thus shows that images say more than words can capture, and that every translation leaves blurred zones open to imagination.
By exploring these limits, Ekphrasis invites us to reflect on what we see, what we say, and what we cannot say.
100
Words and images
Text-to-image and text-to-video diffusion models now make it possible, for the first time in history, to automatically create still or moving images solely from written instructions, or prompts. The use of these prompts radically transforms the classical art history practice of image description, ekphrasis, which in this new context becomes operational: images are generated based on a prior description.
Conversely, image-to-text models can generate detailed textual descriptions from still or moving images, resulting in an automatic ekphrasis.
Thus, a completely new algorithmic entanglement between words and images, the sayable and the visible, is established. The various operations of algorithmic conversion between one and the other inevitably involve shifts, approximations, and errors, which open up a whole range of poetic and visual experiments for artists, as demonstrated by the Estampa collective’s installation.
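As a minimal sketch of this two-way conversion, the snippet below pairs a text-to-image diffusion pipeline with an image-to-text captioning pipeline from the Hugging Face diffusers and transformers libraries. The model identifiers and the prompt are illustrative assumptions, not the systems used by Estampa.

```python
# Sketch of the two directions of word/image conversion described above.
# Model names are illustrative, not those used in Ekphrasis.
from diffusers import StableDiffusionPipeline
from transformers import pipeline

# Text -> image: an "operational ekphrasis", generating a picture from a prompt.
text_to_image = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = text_to_image("a foggy harbour at dawn, cranes standing still").images[0]
image.save("generated.png")

# Image -> text: an automatic ekphrasis, describing the picture in words.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
caption = captioner("generated.png")[0]["generated_text"]
print(caption)  # the round trip words -> image -> words is rarely faithful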
105 Can an AI be tricked?
That’s what a group of researchers managed to do by asking ChatGPT to repeat the word “poem” over and over. The AI started revealing parts of texts it had been trained on: ads, emails, even the Bible! Artist Julien Prévieux plays with the system’s limits and turns the error into a sound artwork. He shows how technology influences our world, sometimes absurdly. What else could be created by hijacking other AIs?
Yes, you can sometimes trick an AI by exploiting its flaws. For example, a group of researchers asked ChatGPT to repeat the word “poem” endlessly. As a result, the AI began to recite excerpts of texts it had learned during training — advertisements, emails, even Bible passages...
Artist Julien Prévieux turned this error into a sound artwork. He thus reveals the limits and blind spots of these systems and transforms them into artistic creation. By hijacking how AI works, he invites us to reflect on its opaque functioning and on how technology influences our lives, sometimes absurdly or unintentionally.
This kind of artistic hack raises a question: what other creative forms could emerge by playing with AI’s mistakes?
106
Julien Prévieux
Born in 1974 in Grenoble (France)
Poem Poem Poem Poem Poem, 2024–2025
Continuous sound piece, adhesive lettering
This work highlights a flaw in the technology of the famous conversational agent ChatGPT. A hack, devised by a group of researchers, consisted in asking the chatbot to repeat the words “poem” or “book” endlessly. The system, which has since been fixed, would respond by revealing texts from its training data. These fragments, coming from sources as varied as ads, legal disclaimers, restaurant menus, the Bible, private emails, and code scripts, expose what is collected — often without consent — to build the massive datasets used to train chatbots. Julien Prévieux presents these texts using artificial voices that read or sing a composition of “ready-made” poems.
On the wall, a series of poems designed by the artist complements the sound piece. These texts, developed from the mechanisms of large language models (tokenization, word embedding, context window…), reveal their inherent limits while producing surprising semantic effects.
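For readers curious about the mechanisms named above, the short sketch below shows what tokenization looks like in practice, using the openly available GPT-2 tokenizer from the transformers library as a stand-in; it illustrates the principle only, not how Prévieux's poems were produced.

```python
# A small look at tokenization, one of the mechanisms named above.
# GPT-2's tokenizer is used here as a stand-in for ChatGPT's own.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

line = "poem poem poem poem poem"
tokens = tokenizer.tokenize(line)   # the text broken into sub-word pieces
ids = tokenizer.encode(line)        # the integer ids the model is actually fed

print(tokens)
print(ids)
print(f"{len(ids)} tokens occupy part of the model's fixed context window")
```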
107 How can a machine learn?
To learn to recognize a cat, for example, AIs must take in millions of images of cats from the Internet and classify them. They don’t understand the way a human does and have never petted a cat. They analyze pixels. In this video, artist Trevor Paglen makes this process visible by showing us the multitude of images used to train AIs. The music is also composed of sounds used to teach AIs to recognize noises.
A machine learns by analyzing data, not by feeling. For example, for an artificial intelligence (AI) to recognize a cat, it must be trained on millions of images of cats. It doesn’t understand what a cat is the way a human does: it has never seen, touched, or heard one. It processes pixels, detects shapes and patterns, and classifies them.
In the work of Trevor Paglen, this process becomes visible: he shows the mass of images used for this training. Even the music in the video is made of sounds used to train AIs to recognize noises. This artistic work highlights a fundamental truth: AIs learn through mechanical accumulation of data, without any sensory experience of the world.
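To make this concrete, here is a minimal, illustrative sketch of a model learning to classify images purely from pixel values. It uses scikit-learn's small built-in handwritten-digit dataset rather than millions of cat photos, but the principle is the same: the machine only ever sees numbers.

```python
# Minimal illustration of "learning from pixels": a classifier trained on
# scikit-learn's tiny 8x8 handwritten-digit images (not on cats).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                     # 1,797 small grayscale images
X, y = digits.data, digits.target          # X: flattened pixel values, y: labels 0-9

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model sees only pixel intensities; it has no idea what a "7" means.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

print("accuracy on unseen images:", round(model.score(X_test, y_test), 3))
```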
108
Trevor Paglen
Born in 1974 in Camp Springs (Maryland, United States)
Behold These Glorious Times!, 2017
Single-channel video installation, color, stereo sound, 10 min, loop
Original music: Holly Herndon
Courtesy of the artist, Altman Siegel, San Francisco, and the Pace Gallery
The video installation Behold These Glorious Times! presents two types of images, projected in a mosaic with a frantic editing rhythm. The first come from massive datasets used to train artificial vision systems to recognize objects, faces, gestures, and emotions. The others reveal to the human eye what artificial vision systems “see” when analyzing the data they are given. The installation is part of a series of works in which Trevor Paglen addresses the field of artificial vision to analyze its epistemological and political stakes.
https://paglen.studio/2020/04/09/behold-these-glorious-times/
The audiovisual installation explores how artificial intelligence systems learn to “see” and make sense of the world. It was created with hundreds of thousands of images from AI training datasets: collections designed to teach machines to recognize objects, faces, gestures, and emotions.
Additional Questions
Made by Gaby David
SECTION 1: OBSERVE & ANALYZE
- Title of the artwork:
- Artist or Project Name:
- Describe the visual patterns or style of the artwork:
(E.g., repetition, abstraction, symmetry, realism, distortion, etc.)
- What type of dataset might have been used to train the AI model that generated this artwork?
☐ News photography
☐ Portraits or selfies
☐ Surveillance footage
☐ Social media images
☐ Historical archives
☐ Scientific/medical images
☐ Other: ______________________________________________
Explain your reasoning:
- What potential biases or cultural assumptions might be embedded in the dataset or algorithm?
- What kind of AI technique might have been used?
☐ GAN (Generative Adversarial Network)
☐ Style Transfer
☐ Diffusion Model
☐ DeepDream
☐ Image Classification
☐ Not sure / Other: ______________________________________
Why do you think so?
For reference: Glossary of AI Image Techniques
- GAN (Generative Adversarial Network)
- A GAN is a model that generates new images by learning from a dataset. It has two networks:
- A generator that tries to create fake images.
- A discriminator that tries to detect whether an image is real or fake.
They “compete” until the generator learns to produce realistic-looking images (a minimal code sketch of this competition appears after the glossary).
Example: Portraits of people who don’t exist.
- Style Transfer
- This technique takes the content of one image (e.g., a photo of a person) and mixes it with the style of another (e.g., a Van Gogh painting). The result is a new image that looks like the first but painted in the style of the second.
Example: A selfie that looks like it was painted by Suzanne Valadon.
- Diffusion Model
- Diffusion models generate images from random noise, gradually refining them into meaningful visuals. They are trained by learning how to reverse the process of adding noise to real images.
Example: Tools like DALL·E, Stable Diffusion, or Midjourney use this method.
- DeepDream
DeepDream amplifies the patterns that a neural network ‘sees’ in an image. It creates surreal and psychedelic results with repeating features like eyes or animals.
Example: A city photo transformed into a dreamlike scene full of spirals and dog faces.
- Image Classification
This technique labels or categorizes existing images based on what the AI recognizes. It does not generate images but analyzes them using trained examples.
Example: Determining if an image contains a cat, a tree, or a car.
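As a complement to the GAN entry above, here is a heavily simplified training-loop sketch in PyTorch of the generator/discriminator competition. To stay short and runnable it generates toy 2-D points instead of images; the network sizes, data, and learning rates are arbitrary placeholders.

```python
# Heavily simplified GAN sketch: a generator and a discriminator "compete",
# here over toy 2-D points standing in for real images.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

loss = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 2) * 0.3 + 2.0   # "real" data: points clustered around (2, 2)
    fake = generator(torch.randn(64, 8))     # "fake" data made from random noise

    # 1) Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = (loss(discriminator(real), torch.ones(64, 1))
              + loss(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# After training, generated points should cluster near the "real" ones.
print(generator(torch.randn(1000, 8)).mean(dim=0).tolist())
```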
SECTION 2: CRITICAL REFLECTION
Summarize one key insight from your analysis:
Choose a category for your reflection:
☐ Most Biased Dataset
☐ Most Mysterious Training Logic
☐ Most Thought-Provoking Output
PLEASE EXPLAIN:
SECTION 3: DEBRIEF & DISCUSSION (in pairs) + WR individually
- What did your group learn about the relationship between data and aesthetics?
- How do you evaluate authorship in the context of AI-generated art?
- Is this a “problem”? Yes/No - Why, how, when…
- Can you “see” the algorithm/AI in the artwork? What is visible or hidden? How is that a problem?
SECTION 4: BEYOND THE IMAGE — AI & SOCIETY
(To be completed individually or in small groups)
- Problem-Solving & Creativity
If this AI system were repurposed for a real-world application (e.g., urban planning, health communication, education), what ethical or practical challenges might it face?
Propose a creative solution to one of these challenges.
- Collaboration & Cooperation
AI-generated art often involves teams of engineers, designers, and curators.
Imagine you are part of such a team: What roles would be essential in responsibly producing and presenting AI art, and how should they collaborate to ensure inclusivity, transparency, and artistic integrity?
- Critical Thinking
Some argue that AI-generated images can reinforce stereotypes or produce misleading representations of reality.
What steps could be taken—at the dataset, algorithmic, or exhibition level—to critically mitigate these effects while still enabling innovation?